
    The Intersection of Art and Technology

    As art influences science and technology, science and technology can in turn inspire art. Recognizing this mutually beneficial relationship, researchers at the Casa Paganini-InfoMus Research Centre work to combine scientific research in information and communications technology (ICT) with artistic and humanistic research. Here, the authors discuss some of their work, showing how their collaboration with artists informed research on analyzing nonverbal expressive and social behavior and contributed to tools, such as the EyesWeb XMI hardware and software platform, that support both artistic and scientific developments. They also sketch out how art-informed multimedia and multimodal technologies find application beyond the arts, in areas including education, cultural heritage, social inclusion, therapy, rehabilitation, and wellness.

    Toward a model of computational attention based on expressive behavior: applications to cultural heritage scenarios

    Our project goals consisted of developing attention-based analysis of human expressive behavior and implementing real-time algorithms in EyesWeb XMI, in order to improve the naturalness of human-computer interaction and the context-based monitoring of human behavior. To this aim, a perceptual model that mimics human attentional processes was developed for expressivity analysis and modeled through entropy. Museum scenarios were selected as an ecological test-bed for three experiments focusing on visitor profiling and visitor flow regulation.
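
    As a minimal sketch of the entropy modeling mentioned above (the feature choice, windowing, and function names below are illustrative assumptions, not the project's published algorithm), the Shannon entropy of a movement feature's distribution can serve as a proxy for how varied, and hence potentially attention-grabbing, the observed behavior is:

        import numpy as np

        def expressivity_entropy(feature_samples, n_bins=16):
            """Shannon entropy of a movement feature (e.g., velocity
            magnitude) over a window; higher entropy = more varied
            behavior (hypothetical expressivity proxy)."""
            hist, _ = np.histogram(feature_samples, bins=n_bins)
            p = hist / hist.sum()
            p = p[p > 0]  # drop empty bins to avoid log(0)
            return -np.sum(p * np.log2(p))

        # A steady gesture scores lower than an erratic one
        steady = np.random.normal(1.0, 0.05, 500)
        erratic = np.random.normal(1.0, 0.8, 500)
        print(expressivity_entropy(steady), expressivity_entropy(erratic))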

    Multisensory interactive technologies for primary education: From science to technology

    While technology is increasingly used in the classroom, its acceptance by teachers and students has proven more difficult than expected. In this work, we focus on multisensory technologies and argue that the intersection between current challenges in pedagogical practice and recent scientific evidence opens novel opportunities for these technologies to bring significant benefit to the learning process. In our view, multisensory technologies are ideal for supporting an embodied and enactive pedagogical approach that exploits the sensory modality best suited to teaching a given concept at school. This represents a great opportunity for designing technologies that are both grounded in robust scientific evidence and tailored to the actual needs of teachers and students. Based on our experience in technology-enhanced learning projects, we propose six golden rules we deem important for seizing and fully exploiting this opportunity.

    Elckerlyc in practice - on the integration of a BML Realizer in real applications

    Building a complete virtual human application from scratch is a daunting task, so it makes sense to rely on existing platforms for behavior generation. When building such an interactive application, one needs to be able to adapt and extend the capabilities of the virtual human offered by the platform without making invasive modifications to the platform itself. This paper describes how Elckerlyc, a novel platform for controlling a virtual human, offers these possibilities.
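
    For context, a BML realizer such as Elckerlyc consumes blocks of the Behavior Markup Language. The sketch below builds a small BML block as a plain string in Python; the element names follow the BML 1.0 standard, but how the block is delivered to the realizer is application-specific and assumed here:

        # Hypothetical composition of a BML block; in a real application
        # this string would be handed to the realizer's input channel.
        bml_block = """\
        <bml id="bml1" xmlns="http://www.bml-initiative.org/bml/bml-1.0">
          <speech id="s1"><text>Nice to meet you!</text></speech>
          <head id="h1" lexeme="NOD" start="s1:start"/>
          <gesture id="g1" lexeme="BEAT" start="s1:start" end="s1:end"/>
        </bml>"""
        print(bml_block)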

    Informing bowing and violin learning using movement analysis and machine learning

    Violin performance is characterized by an intimate connection between the player and her instrument that allows continuous control of sound through a sophisticated bowing technique. Great importance in violin pedagogy is therefore given to the technique of the right hand, which is responsible for most of the sound produced. This study analyzes the bow trajectory in three different classical violin exercises, using audio and motion capture recordings, to classify the different bowing techniques with machine learning. Our results show that a clustering algorithm is able to appropriately group together the different shapes produced by the bow trajectories.
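
    As an illustration of the clustering step (the two-dimensional descriptors below are invented for the example; the study derives its features from audio and motion capture), grouping per-stroke bow descriptors with k-means could look like this:

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical 2-D descriptors per bow stroke, e.g. normalized
        # stroke length and bounce rate; two synthetic technique groups.
        rng = np.random.default_rng(0)
        detache = rng.normal([0.8, 0.1], 0.05, size=(30, 2))
        spiccato = rng.normal([0.2, 0.9], 0.05, size=(30, 2))
        X = np.vstack([detache, spiccato])

        # Unsupervised grouping of bowing techniques
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
        print(labels)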

    From motions to emotions: Classification of Affect from Dance Movements using Deep Learning

    This work investigates the classification of emotions from full-body MoCap data using Convolutional Neural Networks (CNNs). Rather than addressing regular day-to-day activities, we focus on a more complex type of full-body movement: dance. For this purpose, a new dataset was created containing short excerpts of performances by professional dancers who interpreted four emotional states: anger, happiness, sadness, and insecurity. Fourteen minutes of motion capture data are used to explore different CNN architectures and data representations. On test data from other performances by the same dancers, the four-class classification task reaches an F1 score of up to 0.79. Hence, through deep learning, this paper proposes a novel and effective method of emotion classification which can be exploited in affective interfaces.
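
    A minimal PyTorch sketch of this kind of pipeline (the joint count, window length, and layer sizes are assumptions, not the paper's architecture): a 1D CNN over a window of MoCap frames, with one input channel per joint coordinate, ending in four class logits:

        import torch
        import torch.nn as nn

        N_JOINTS, WINDOW, N_CLASSES = 20, 120, 4  # assumed data shape

        model = nn.Sequential(
            nn.Conv1d(N_JOINTS * 3, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(128, N_CLASSES),  # anger, happiness, sadness, insecurity
        )

        x = torch.randn(8, N_JOINTS * 3, WINDOW)  # a batch of MoCap windows
        print(model(x).shape)  # torch.Size([8, 4]): per-class logits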

    Automated detection of impulsive movements in HCI

    This paper introduces an algorithm for automatically measuring impulsivity, which can be used as a major expressive movement feature in systems for real-time analysis of emotion expression from human full-body movement, a research area that has received increased attention in the affective computing community. In particular, our algorithm was developed in the framework of the EU H2020 ICT Project DANCE, which investigates techniques for sensory substitution for blind people, in order to enable perception of, and participation in, non-verbal artistic whole-body experiences. The algorithm was tested by applying it to a reference archive of short dance performances that includes a collection of both impulsive and fluid movements. Results show that our algorithm can reliably distinguish impulsive from fluid performances.
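
    A heuristic sketch of impulsivity detection (an assumption-laden stand-in, not the DANCE project's published algorithm): flag a movement segment as impulsive when its peak acceleration greatly exceeds its typical level, i.e., a sudden burst without a gradual preparation phase:

        import numpy as np

        def is_impulsive(positions, fps=100.0, ratio=4.0):
            """positions: (frames, 3) marker trajectory. Returns True when
            peak acceleration dwarfs the median -- a crude impulsivity cue."""
            vel = np.gradient(positions, 1.0 / fps, axis=0)
            acc = np.linalg.norm(np.gradient(vel, 1.0 / fps, axis=0), axis=1)
            return acc.max() > ratio * np.median(acc)

        t = np.linspace(0, 2, 200)
        fluid = np.column_stack([np.sin(t), np.cos(t), t])  # smooth gesture
        sudden = fluid.copy()
        sudden[100:, 0] += 0.5  # abrupt jump halfway through
        print(is_impulsive(fluid), is_impulsive(sudden))  # False True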